AAAI AI-Alert for Dec 14, 2021


The 'Invisible', Often Unhappy Workforce That's Deciding the Future of AI

#artificialintelligence

Two new reports, including a paper led by Google Research, express concern that the current trend of relying on a cheap and often disempowered pool of global gig workers to create ground truth for machine learning systems could have major downstream implications for AI. Among its conclusions, the Google study finds that crowdworkers' own biases are likely to become embedded in the AI systems whose ground truths are based on their responses; that widespread unfair work practices (including in the US) on crowdworking platforms are likely to degrade the quality of responses; and that the 'consensus' system (effectively a 'mini-election' over some piece of ground truth that will influence downstream AI systems) currently used to resolve disputes can actually throw away the best and/or most informed responses. That's the bad news; the worse news is that nearly all of the remedies are expensive, time-consuming, or both. The first paper, from five Google researchers, is called Whose Ground Truth? Accounting for Individual and Collective Identities Underlying Dataset Annotation; the second, from two researchers at Syracuse University in New York, is called The Origin and Value of Disagreement Among Data Labelers: A Case Study of Individual Differences in Hate Speech Annotation.
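The consensus mechanism the papers criticize is, at its core, a plurality vote over annotator responses. A minimal sketch (the labels and voter mix below are invented for illustration, not drawn from either paper) shows how it can discard an informed minority:

```python
from collections import Counter

def consensus_label(labels):
    """Plurality vote over crowdworker labels: the most common answer wins,
    and minority responses are discarded regardless of annotator expertise."""
    return Counter(labels).most_common(1)[0][0]

# Suppose three untrained workers miss coded hate speech that two
# domain-familiar annotators correctly flag:
votes = ["benign", "benign", "benign", "hate", "hate"]
print(consensus_label(votes))  # → "benign": the informed minority is thrown away
```

This is why both papers argue that annotator disagreement can carry signal worth preserving rather than noise to be voted away.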


This tiny AI-powered robot is learning to explore the ocean on its own

#artificialintelligence

The ocean is big, and our attempts to understand it are still largely surface-deep. According to the National Oceanic and Atmospheric Administration, around 80 percent of the big blue is "unmapped, unobserved, and unexplored." Ships are the primary way to collect information about the seas, but they're costly to send out frequently. More recently, robotic buoys called Argo floats have been drifting with the currents, diving up and down to take a variety of measurements at depths up to 6,500 feet. But new aquatic robots from a lab at Caltech could rove deeper and take on more tailored underwater missions.


Twitter Cortex Proposes LMSOC for Socially Sensitive Pretraining

#artificialintelligence

A phrase like "It's cold today" would suggest a very different temperature if uttered in Nairobi rather than Montreal, while words like "troll" and "tweet" referred to totally different things just a generation ago. Although contemporary large-scale pretrained language models are very effective at learning linguistic representations, they are far less well equipped to capture speaker- or author-related temporal, geographical, social and other contextual aspects. In the new paper LMSOC: An Approach for Socially Sensitive Pretraining, a Twitter Cortex research team proposes LMSOC, a simple but effective approach for learning both linguistically contextualized and socially sensitive representations in large-scale language models. An implicit assumption in most pretrained language models (PLMs) is that language is independent of extra-linguistic contexts such as speaker/author identity and social settings. Despite the impressive achievements of PLMs, this remains a critical weakness: sociolinguistics offers strong evidence that social context significantly shapes how language is used and understood.
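One simple way to picture this kind of conditioning is to prepend a social-context vector (say, one per region or time period) to the token sequence before it enters the model, so attention can condition every token on the utterance's context. The sketch below uses invented lookup tables and dimensions purely for illustration; it is not the actual LMSOC mechanism from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16  # illustrative embedding width

# Hypothetical lookup tables: token embeddings and social-context embeddings.
token_emb = {w: rng.normal(size=d_model) for w in ["it's", "cold", "today"]}
context_emb = {"nairobi": rng.normal(size=d_model),
               "montreal": rng.normal(size=d_model)}

def build_input(tokens, social_context):
    """Prepend a social-context vector to the token embedding sequence,
    so downstream attention sees where/when the utterance comes from."""
    rows = [context_emb[social_context]] + [token_emb[t] for t in tokens]
    return np.stack(rows)

x = build_input(["it's", "cold", "today"], "nairobi")
print(x.shape)  # (4, 16): one context row plus three token rows
```

The same three tokens paired with the "montreal" vector would yield a different input sequence, which is the basic sense in which representations become socially sensitive.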


Infinite Memory Transformer: Attending to Arbitrarily Long Contexts Without Increasing Computation Burden

#artificialintelligence

When reading a novel, humans naturally remember relevant plot information even if it was presented many chapters earlier. Although today's transformer-based language models have made impressive progress in natural language processing, they struggle in this regard, as the compute required for modelling long-term memories grows quadratically with the length of the text and will eventually exceed the model's finite memory capacity. To overcome this limitation, a research team from Instituto de Telecomunicações, DeepMind, Institute of Systems and Robotics, Instituto Superior Técnico and Unbabel has proposed the "∞-former" (infinite former) -- a transformer model equipped with unbounded long-term memory (LTM) that enables it to attend to arbitrarily long contexts. The team extends the vanilla transformer with a continuous LTM to enable the proposed ∞-former to access long-range context. The novel approach employs a continuous-space attention framework to attend over the LTM signal, in which the key matrix size depends on the number of basis functions instead of the length of the context being attended to.
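The core trade that makes the memory unbounded can be sketched with ordinary least squares: fit a fixed number of basis functions to an arbitrarily long key sequence, so the stored representation's size depends only on the basis count, not the context length. This is a simplification of the paper's continuous attention (dimensions and basis choice here are invented for illustration):

```python
import numpy as np

def gaussian_basis(positions, num_basis):
    """Evaluate num_basis Gaussian bases at normalized positions in [0, 1]."""
    centers = np.linspace(0, 1, num_basis)
    width = 1.0 / num_basis
    return np.exp(-((positions[:, None] - centers[None, :]) ** 2) / (2 * width**2))

def compress_memory(keys, num_basis):
    """Fit basis coefficients to a length-N key sequence via least squares.
    The stored memory has shape (num_basis, d) -- independent of N."""
    n, _ = keys.shape
    phi = gaussian_basis(np.linspace(0, 1, n), num_basis)  # (N, num_basis)
    coeffs, *_ = np.linalg.lstsq(phi, keys, rcond=None)    # (num_basis, d)
    return coeffs

long_keys = np.random.randn(10_000, 64)  # an arbitrarily long context
memory = compress_memory(long_keys, num_basis=128)
print(memory.shape)  # (128, 64) no matter how long the context grows
```

Attention computed against this fixed-size coefficient matrix costs the same regardless of how much history was folded into it, which is the sense in which the context can be "arbitrarily long" without increasing the computation burden.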


DeepMind AI tackles one of chemistry's most valuable techniques

#artificialintelligence

The AI predicts the distribution of electrons within a molecule (illustration) and uses it to calculate physical properties. Credit: DeepMind

A team led by scientists at the London-based artificial-intelligence company DeepMind has developed a machine-learning model that suggests a molecule's characteristics by predicting the distribution of electrons within it. The approach, described in the 10 December issue of Science, can calculate the properties of some molecules more accurately than existing techniques. "To make it as accurate as they have done is a feat," says Anatole von Lilienfeld, a materials scientist at the University of Vienna. The paper is "a solid piece of work", says Katarzyna Pernal, a computational chemist at Lodz University of Technology in Poland. But she adds that the machine-learning model has a long way to go before it can be useful for computational chemists.


This Air Force Targeting AI Thought It Had a 90% Success Rate. It Was More Like 25%

#artificialintelligence

If the Pentagon is going to rely on algorithms and artificial intelligence, it's got to solve the problem of "brittle AI." Gen. Daniel Simpson, assistant deputy chief of staff for intelligence, surveillance, and reconnaissance, illustrated just how far there is to go in remarks on Monday. Initially, the AI was fed data from a sensor that looked for a single surface-to-surface missile at an oblique angle, Simpson said. Then it was fed data from another sensor that looked for multiple missiles at a near-vertical angle.
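The failure mode described here is distribution shift: a model trained under one viewing geometry can remain highly confident while being mostly wrong under another. A toy sketch with entirely synthetic data (not the Air Force system or its sensors) makes the confident-but-wrong pattern concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Oblique-angle" training data: two target classes separated along feature 0.
X_train = np.vstack([rng.normal([-3, 0], 0.5, (200, 2)),
                     rng.normal([+3, 0], 0.5, (200, 2))])
y_train = np.array([0] * 200 + [1] * 200)

# Nearest-centroid classifier with a softmax "confidence" over distances.
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    conf = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
    return conf.argmax(axis=1), conf.max(axis=1)

# "Near-vertical" test data: the same targets now *look like* the other
# class to the trained model, because the viewing geometry changed.
X_test = np.vstack([rng.normal([+2, 0], 0.5, (200, 2)),
                    rng.normal([-2, 0], 0.5, (200, 2))])
y_test = y_train.copy()

pred, conf = predict(X_test)
# Accuracy collapses while reported confidence stays high.
print(f"accuracy {np.mean(pred == y_test):.0%}, mean confidence {conf.mean():.0%}")
```

The gap between the stated confidence and the realized accuracy is exactly the mismatch the article's headline describes.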


Taking Some Guesswork Out of Drug Discovery

#artificialintelligence

Researchers at the Massachusetts Institute of Technology (MIT) have developed a deep learning model, GeoMol, that can rapidly predict the likely three-dimensional shapes of drug-like molecules given two-dimensional graphs of their structure, which could expedite drug discovery. GeoMol's predictions are based solely on two-dimensional molecular graphs, and it can process molecules in seconds while outperforming other machine learning models, according to the researchers. The system uses a message passing neural network to predict the lengths of chemical bonds between atoms and the angles of those bonds; GeoMol then predicts the structure of each atom's local neighborhood and assembles neighboring pairs of rotatable bonds by computing and aligning torsion angles. MIT's Octavian-Eugen Ganea said GeoMol could help drugmakers identify new drugs faster by reducing the number of molecules on which they must experiment.
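The torsion (dihedral) angle that GeoMol aligns is a standard geometric quantity: the angle between the planes defined by four consecutive bonded atoms. A self-contained sketch of its computation from 3-D coordinates (the example atom positions are invented):

```python
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Torsion angle (radians) about the p1-p2 bond, from four 3-D atom positions."""
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    v = b0 - np.dot(b0, b1) * b1  # component of b0 perpendicular to the bond
    w = b2 - np.dot(b2, b1) * b1  # component of b2 perpendicular to the bond
    return np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w))

# A planar anti (trans) arrangement of four atoms has a torsion of 180 degrees.
pts = [np.array(p, float) for p in [(1, 1, 0), (1, 0, 0), (0, 0, 0), (-1, -1, 0)]]
print(np.degrees(dihedral(*pts)))  # 180.0
```

Each rotatable bond contributes one such angle, so fixing all torsions (together with bond lengths and bond angles) pins down the full 3-D conformation that the model assembles.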


Researchers develop tiny camera the size of a grain of salt - and it could turn your phone into one big camera

The Independent - Tech

Researchers have created an ultracompact camera the size of a grain of salt capable of producing pictures on par with those from lenses hundreds of thousands of times larger. Engineers from Princeton University and the University of Washington say that the camera can produce full-colour images and could be used in collaboration with medical robots to diagnose and treat diseases. Whereas traditional cameras use curved glass or plastic to bend light rays, this new camera uses 'metasurface' technology, which is produced like a computer chip. The metasurface of this particular camera has 1.6 million cylindrical posts – each approximately the size of a virus – making up a system just half a millimetre wide. Each of those posts has its own unique geometry, working like an optical antenna, and machine-learning algorithms use the posts' combined interactions with light to create high-quality images.